
Conversation

@Sinscerly

Description

This PR adds Prometheus metadata to all metric types, so Prometheus can include the TYPE and HELP information when scraping the endpoint.

See more details in #12110.

Fixes: #12110

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • Build/CI
  • Test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

Bug Severity

  • BLOCKER
  • Critical
  • Major
  • Minor
  • Trivial

Screenshots (if appropriate):

How Has This Been Tested?

I've tested the code change with a simplistic setup (I didn't get everything running; @abh1sar, could you test it more fully with your extra Prometheus metrics, as my dev environment is still not working correctly?).
I've verified that the TYPE and HELP information is added to the nicely sorted list of metrics the exporter provides.

How did you try to break this feature and the system with this change?

I've enabled the Prometheus exporter and queried the endpoint multiple times to verify the data comes out like this:

# Cloudstack Prometheus Metrics
# HELP cloudstack_domain_limit_cpu_cores_total Total CPU core limit across all domains
# TYPE cloudstack_domain_limit_cpu_cores_total gauge
cloudstack_domain_limit_cpu_cores_total 0
# HELP cloudstack_domain_limit_memory_mibs_total Total memory limit in MiB across all domains
# TYPE cloudstack_domain_limit_memory_mibs_total gauge
cloudstack_domain_limit_memory_mibs_total 0
# HELP cloudstack_domain_resource_count Resource usage count per domain
# TYPE cloudstack_domain_resource_count gauge
cloudstack_domain_resource_count{domain="/", type="memory"} 0
cloudstack_domain_resource_count{domain="/", type="cpu"} 0
cloudstack_domain_resource_count{domain="/", type="gpu"} 0
cloudstack_domain_resource_count{domain="/", type="primary_storage"} 0
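For reference, the grouping that produces this layout can be sketched as follows. This is a minimal, self-contained illustration, not the actual PrometheusExporterImpl code: the Item class and its help/type/sample fields are assumptions for the sketch; only item.name and item.getHelp() appear in the real diff.

import java.util.Comparator;
import java.util.List;

// Minimal sketch: emit HELP/TYPE once per unique metric name,
// assuming items are sorted by metric name first.
class Item {
    final String name;    // e.g. "cloudstack_domain_resource_count"
    final String help;    // e.g. "Resource usage count per domain"
    final String type;    // e.g. "gauge"
    final String sample;  // one rendered sample line, e.g. name{labels} value

    Item(String name, String help, String type, String sample) {
        this.name = name;
        this.help = help;
        this.type = type;
        this.sample = sample;
    }
}

class ExporterSketch {
    static String render(List<Item> items) {
        items.sort(Comparator.comparing((Item i) -> i.name));
        StringBuilder sb = new StringBuilder("# Cloudstack Prometheus Metrics\n");
        String currentMetricName = "";
        for (Item item : items) {
            // Write the metadata header only when a new metric name starts.
            if (!item.name.equals(currentMetricName)) {
                currentMetricName = item.name;
                sb.append("# HELP ").append(currentMetricName).append(" ")
                        .append(item.help).append("\n");
                sb.append("# TYPE ").append(currentMetricName).append(" ")
                        .append(item.type).append("\n");
            }
            sb.append(item.sample).append("\n");
        }
        return sb.toString();
    }
}

With items sorted by name, each header appears exactly once per group, which is what the output above shows.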

@boring-cyborg
Copy link

boring-cyborg bot commented Nov 20, 2025

Congratulations on your first Pull Request and welcome to the Apache CloudStack community! If you have any issues or are unsure about anything, please check our Contribution Guide (https://github.com/apache/cloudstack/blob/main/CONTRIBUTING.md).

@DaanHoogland
Contributor

@blueorangutan package

@blueorangutan

@DaanHoogland a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@codecov

codecov bot commented Nov 21, 2025

Codecov Report

❌ Patch coverage is 0% with 43 lines in your changes missing coverage. Please review.
✅ Project coverage is 17.56%. Comparing base (6dc259c) to head (8851dd9).
⚠️ Report is 86 commits behind head on main.

Files with missing lines                                 Patch %   Lines
...che/cloudstack/metrics/PrometheusExporterImpl.java    0.00%     43 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##               main   #12112   +/-   ##
=========================================
  Coverage     17.55%   17.56%           
- Complexity    15535    15541    +6     
=========================================
  Files          5911     5911           
  Lines        529359   529377   +18     
  Branches      64655    64656    +1     
=========================================
+ Hits          92949    92993   +44     
+ Misses       425952   425924   -28     
- Partials      10458    10460    +2     
Flag        Coverage Δ
uitests     3.58% <ø> (ø)
unittests   18.63% <0.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.



@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ el10 ✔️ debian ✔️ suse15. SL-JID 15807

@DaanHoogland
Contributor

@blueorangutan test

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol8 mgmt + kvm-ol8) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian Build Failed (tid-14851)

if (!item.name.equals(currentMetricName)) {
    currentMetricName = item.name;
    stringBuilder.append("# HELP ").append(currentMetricName).append(" ")
            .append(item.getHelp()).append("\n");
Contributor

Hi @Sinscerly - I have a question about the metrics that use tags, for which you have modified the help text. If I'm not mistaken, the text won't be updated, because the modification happens after invoking item.toMetricsString(), so at this line the previous help text will be displayed. Is this correct?
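To illustrate the concern (a simplified, hypothetical sketch, not the actual exporter code; TaggedItem and its methods are invented for the demo):

// Hypothetical: the HELP line is built from getHelp() before
// toMetricsString() runs, so a help text rewritten inside
// toMetricsString() is not reflected in the header.
class TaggedItem {
    private String help = "Host CPU usage in MHz";

    String getHelp() { return help; }

    String toMetricsString() {
        // Simulates an item that rewrites its help text for the tagged variant.
        help = "Host CPU usage in MHz grouped by host tags";
        return "cloudstack_host_cpu_usage_mhz_total_by_tag{tag=\"x\"} 0\n";
    }
}

class OrderingDemo {
    public static void main(String[] args) {
        TaggedItem item = new TaggedItem();
        StringBuilder sb = new StringBuilder();
        // HELP is appended first, so it reads the pre-rewrite help text.
        sb.append("# HELP metric ").append(item.getHelp()).append("\n");
        sb.append(item.toMetricsString()); // the rewrite happens too late
        System.out.print(sb);
    }
}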

Author

Hi @nvazquez,

The HELP and TYPE lines only have to be added before each group of metrics: once for cloudstack_domain_resource_count, regardless of how many items with different tags share that metric name.

Does this clarify your question?

Contributor

@Sinscerly sorry for the delay. My concern was only about the metrics that override the help field in the item.toMetricsString() call.

For example:
During the metrics iteration, when creating the metrics string for ItemHostCpu, the string will always start with the default HELP text (Host CPU usage in MHz), whether the metric has tags or not. In case the metric contains tags, I believe it should display Host CPU usage in MHz grouped by host tags instead. Please correct me if I'm wrong.

Additionally, I think this if block, if (!item.name.equals(currentMetricName)), can be removed.

Author

@Sinscerly commented Dec 15, 2025

Hi @nvazquez,

Well, the ItemHostCpu item returns two different metrics: cloudstack_host_cpu_usage_mhz_total and cloudstack_host_cpu_usage_mhz_total_by_tag. I just made sure the correct help line is output for each of those metrics.

In the end all metrics are sorted and each unique help line is added, so this will give:

# HELP cloudstack_host_cpu_usage_mhz_total <help description>
... metrics for the above name
# HELP cloudstack_host_cpu_usage_mhz_total_by_tag <help description>
... metrics for the above name

So it seemed logical to me to also change that. Although I now doubt it will work, as I see toMetricsString is called after the HELP and TYPE lines are added. Addressing that would require a bigger rework of how metrics are added, splitting them up into separate items.
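A hypothetical sketch of that split (not the current ItemHostCpu code; the class shapes and method names here are assumptions), where each variant carries its own metric name and help text so the HELP line never needs rewriting:

// Hypothetical: split the tagged variant into its own item class.
abstract class HostCpuMetric {
    abstract String name();
    abstract String help();
}

class ItemHostCpu extends HostCpuMetric {
    @Override String name() { return "cloudstack_host_cpu_usage_mhz_total"; }
    @Override String help() { return "Host CPU usage in MHz"; }
}

class ItemHostCpuByTag extends HostCpuMetric {
    @Override String name() { return "cloudstack_host_cpu_usage_mhz_total_by_tag"; }
    @Override String help() { return "Host CPU usage in MHz grouped by host tags"; }
}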


Removing the if (!item.name.equals(currentMetricName)) check would cause harm, as you only want the header once for every unique metric name. You want:

# HELP cloudstack_domain_resource_count Resource usage count per domain
# TYPE cloudstack_domain_resource_count gauge
cloudstack_domain_resource_count{domain="/", type="memory"} 0
cloudstack_domain_resource_count{domain="/", type="cpu"} 0
cloudstack_domain_resource_count{domain="/", type="gpu"} 0
cloudstack_domain_resource_count{domain="/", type="primary_storage"} 0

And not the HELP and TYPE repeated for every single metric sample:

# HELP cloudstack_domain_resource_count Resource usage count per domain
# TYPE cloudstack_domain_resource_count gauge
cloudstack_domain_resource_count{domain="/", type="memory"} 0
# HELP cloudstack_domain_resource_count Resource usage count per domain
# TYPE cloudstack_domain_resource_count gauge
cloudstack_domain_resource_count{domain="/", type="cpu"} 0
# HELP cloudstack_domain_resource_count Resource usage count per domain
# TYPE cloudstack_domain_resource_count gauge
cloudstack_domain_resource_count{domain="/", type="gpu"} 0
# HELP cloudstack_domain_resource_count Resource usage count per domain
# TYPE cloudstack_domain_resource_count gauge
cloudstack_domain_resource_count{domain="/", type="primary_storage"} 0

Contributor

Thanks @Sinscerly, please ignore my second comment then :)

About the first one, do you think it can be addressed in a separate PR, leaving this PR as it is now?

@blueorangutan

[SF] Trillian Build Failed (tid-14866)

@blueorangutan

[SF] Trillian test result (tid-14870)
Environment: kvm-ol8 (x2), zone: Advanced Networking with Mgmt server ol8
Total time taken: 49860 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr12112-t14870-kvm-ol8.zip
Smoke tests completed. 150 look OK, 0 have errors, 0 did not run
Only failed and skipped test results are shown below (none in this run).

@DaanHoogland
Contributor

@kiranchavala @NuxRo , can you guys have a look at this?

@DaanHoogland
Contributor

@Sinscerly, given comment #12112 (comment), will you work on this more, or do you want this merged as is? cc @kiranchavala @NuxRo @nvazquez

Contributor

@NuxRo left a comment

tested briefly, LGTM, good job @Sinscerly

@DaanHoogland
Contributor

@kiranchavala @nvazquez @Sinscerly, will we merge as is and address the rest in further issues?



Development

Successfully merging this pull request may close these issues.

Prometheus metadata is missing for the prometheus exporter
